    Theoretical investigations of the heaviest elements: benchmark accuracy and reliable uncertainty

    The aim of the research presented in this Thesis is high-accuracy ab initio investigation of atomic and molecular properties of heavy and superheavy elements. The state-of-the-art relativistic coupled cluster approach is applied to calculations of ionization potentials and electron affinities of these atoms and of molecular properties of the compounds that they form. The important effects of relativity on these heavy species were investigated in depth. We performed extensive investigations of the effect of the various computational parameters on the calculated properties, which allowed us to devise a reliable scheme for assigning realistic uncertainties to our predictions. The results of this research will assist the challenging experimental investigations of heavy and superheavy elements and provide new knowledge on the influence of relativity on electronic structure and properties.

    PELA: Learning Parameter-Efficient Models with Low-Rank Approximation

    Applying a pre-trained large model to downstream tasks is prohibitive under resource-constrained conditions. Recent dominant approaches to the efficiency issue add a few learnable parameters to a fixed backbone model; this strategy, however, still requires loading the full large model for downstream fine-tuning with limited resources. In this paper, we propose a novel method for increasing the parameter efficiency of pre-trained models by introducing an intermediate pre-training stage. To this end, we first employ low-rank approximation to compress the original large model and then devise a feature distillation module and a weight perturbation regularization module, both specifically designed to enhance the low-rank model. In particular, we update only the low-rank model while freezing the backbone parameters during pre-training, which allows the low-rank model to be used directly and efficiently for downstream fine-tuning tasks. The proposed method achieves efficiency in both required parameters and computation time while maintaining comparable results, with minimal modifications to the backbone architecture. Specifically, when applied to three vision-only and one vision-language Transformer models, our approach often shows a decrease of merely ~0.6 points in performance while reducing the original parameter size by 1/3 to 2/3.
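
    For concreteness, a minimal sketch of the kind of truncated-SVD low-rank factorization such compression typically relies on; the function name, rank choice, and use of plain SVD are illustrative assumptions, not the paper's exact procedure:

```python
# Hypothetical sketch: factor a pre-trained weight matrix into two low-rank
# factors whose product approximates it, shrinking the parameter count from
# d_out*d_in to rank*(d_out + d_in).
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Split a (d_out, d_in) weight into rank-limited factors A @ B."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    sqrt_s = torch.sqrt(S[:rank])
    A = U[:, :rank] * sqrt_s                 # (d_out, rank)
    B = sqrt_s.unsqueeze(1) * Vh[:rank, :]   # (rank, d_in)
    return A, B

W = torch.randn(768, 3072)                   # e.g. a Transformer FFN weight
A, B = low_rank_factorize(W, rank=256)
print((W - A @ B).norm() / W.norm())         # relative approximation error
```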

    Mining Conditional Part Semantics with Occluded Extrapolation for Human-Object Interaction Detection

    Human-Object Interaction Detection is a crucial aspect of human-centric scene understanding, with important applications in various domains. Despite recent progress in this field, recognizing subtle and detailed interactions remains challenging. Existing methods try to use human-related clues to alleviate the difficulty, but they rely heavily on external annotations or knowledge, limiting their practical applicability in real-world scenarios. In this work, we propose a novel Part Semantic Network (PSN) to solve this problem. The core of PSN is a Conditional Part Attention (CPA) mechanism, in which cross-attention is computed with the human features as keys and values and the object feature as the query. In this way, our model learns to automatically focus on the most informative human parts conditioned on the involved object, generating more semantically meaningful features for interaction recognition. Additionally, we propose an Occluded Part Extrapolation (OPE) strategy to facilitate interaction recognition in occluded scenarios, which teaches the model to extrapolate detailed features from partially occluded ones. Our method consistently outperforms prior approaches on the V-COCO and HICO-DET datasets, without external data or extra annotations. Ablation studies further validate the effectiveness of each component of the proposed method.
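
    A minimal sketch of the query/key/value roles the abstract describes for CPA; the feature dimension, the number of part tokens, and the single-head formulation are illustrative assumptions:

```python
# Hypothetical single-head cross-attention where the object feature queries
# the human part features, as in the Conditional Part Attention described above.
import torch
import torch.nn.functional as F

d = 256
human_parts = torch.randn(1, 17, d)    # human part features -> keys/values
object_feat = torch.randn(1, 1, d)     # involved object feature -> query

Wq, Wk, Wv = (torch.nn.Linear(d, d) for _ in range(3))
q, k, v = Wq(object_feat), Wk(human_parts), Wv(human_parts)

attn = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # (1, 1, 17)
part_context = attn @ v   # object-conditioned summary of informative parts
```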

    Element Recognition and Innovation Transformation of Cultural and Creative Products: Based on Eye Movement Experiment

    This paper analyzes tourists' perceived preferences for the elements of cultural and creative products using human-computer interaction technology and constructs an innovation and transformation path for such products along four dimensions: concept, elements, content, and structure. Great Wall tourism cultural and creative products are used as an example. The findings demonstrate that: (1) from the viewpoint of behavioral data, a product's overall inventiveness, formal design, manufacturing method, area, cultural collection value, and function influence visitors' perceived preferences to varying degrees; (2) the richness and attraction of character expression, action, and form components, as shown in the hotspot and matrix maps, can boost visitors' visual engagement, and scenic-area architecture can enhance visitors' immersion in local culture since it serves as a design prototype for cultural and creative products; (3) viewed through eye-movement indices, the number of fixation points, total fixation time, and saccade frequency differ significantly across cultural and creative products with different design elements, and these differences further manifest as individual tourist behavior characteristics; (4) from a design standpoint, producing high-quality cultural and creative products requires that products satisfy visitors' needs. Innovative ideas should steer the innovation and transformation of cultural and creative products: element innovation enhances the universal design of products, content innovation enhances their cultural legacy, and structural innovation lengthens their market cycle. The use of modern technology broadens the research methodologies of the tourism field and creates new research environments for tourism experimentation.

    The internet hospital as a telehealth model in China: Systematic search and content analysis

    Background: The internet hospital is an innovative organizational form and service mode that has arisen with the "internet plus" wave in the Chinese medical industry. It is the product of the interaction between consumer health needs and supply-side reform. However, there has been no systematic summary of its establishment and definition, nor an analysis of its service content. Objective: The primary purpose of this study was to understand the definition, establishment, and development status of internet hospitals. Methods: Data on internet hospitals were obtained via the Baidu search engine for results up until January 1, 2019. Based on the search results, we collected more detailed information from the official websites and apps of 130 internet hospitals and built a database for descriptive analysis. Results: By January 2019, the number of registered internet hospitals had grown to approximately 130 across 25 provinces, covering 73.5% of all provinces and province-level municipalities in China. Internet hospitals, as a new telehealth model, are distinct from but overlap with online health, telemedicine, and mobile medicine. They offer four kinds of services: convenience services, online medical services, telemedicine, and related industries. In general, ordinary and internet hospitals share an underlying common treatment flowchart of care. There are three types of sponsors (government-led integration, hospital-led, and enterprise-led internet hospitals) whose stakeholders have different supporting content and responsibilities. Conclusions: Internet hospitals are booming in China; they represent a joint effort by the government and the market to alleviate the coexisting shortage of medical resources and waste of medical supplies. The origins of internet hospitals in the eastern and western regions, the purposes of their initiators, and the content of their online and offline services differ. Only further standardized management combined with reasonable industry freedom can realize the internet hospital's original intention of meeting various health needs.

    Ionization potentials and electron affinity of oganesson with relativistic coupled cluster method

    We present high-accuracy relativistic coupled cluster calculations of the first and second ionisation potentials and the electron affinity of the heaviest element in the Periodic Table, Og. The results were extrapolated to the basis set limit and augmented with higher order excitations (up to perturbative quadruples), the Breit contribution, and the QED self-energy and vacuum-polarisation corrections. We performed an extensive investigation of the effect of the various computational parameters on the calculated properties, which allowed us to assign realistic uncertainties to our predictions. A similar study of Rn, the lighter homologue of Og, yields excellent agreement with experiment for the first ionisation potential and a reliable prediction for the second.
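
    For orientation, a generic composite scheme of the kind the abstract describes; the abstract does not give the exact functional forms, so the expressions below (including the inverse-cubic basis-set extrapolation) are an illustrative convention rather than the paper's own formulas:

```latex
% Illustrative composite energy: basis-set-limit coupled cluster value plus
% higher-excitation, Breit, and QED corrections.
E \approx E_{\mathrm{CBS}}
        + \Delta E_{\mathrm{T}} + \Delta E_{\mathrm{(Q)}}
        + \Delta E_{\mathrm{Breit}} + \Delta E_{\mathrm{QED}}

% A common two-parameter extrapolation of the correlation energy with
% basis-set cardinal number X:
E_{\mathrm{corr}}(X) = E_{\mathrm{corr}}^{\mathrm{CBS}} + A\,X^{-3}
```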

    Towards Generalizable Deepfake Detection by Primary Region Regularization

    Existing deepfake detection methods have reached a bottleneck in generalizing to unseen forgeries and manipulation approaches. Based on the observation that deepfake detectors tend to overfit specific primary regions of the input, this paper enhances generalization capability from a novel regularization perspective: we augment the images by removing their primary regions, thereby preventing the detector from over-relying on data bias. Our method consists of two stages, static localization of primary region maps and dynamic exploitation of primary region masks, and can be seamlessly integrated into different backbones without affecting their inference efficiency. We conduct extensive experiments on three widely used deepfake datasets (DFDC, DF-1.0, and Celeb-DF) with five backbones. Our method demonstrates an average performance improvement of 6% across the backbones and performs competitively against several state-of-the-art baselines. Code and dataset: https://github.com/xaCheng1996/PRLE
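
    A minimal sketch of the primary-region-removal augmentation idea; the map source, threshold, and function name are illustrative assumptions, not the paper's exact pipeline:

```python
# Hypothetical augmentation: zero out the pixels that a static localization
# map marks as "primary" so the detector cannot over-rely on those regions.
import torch

def remove_primary_region(image: torch.Tensor,
                          region_map: torch.Tensor,
                          threshold: float = 0.5) -> torch.Tensor:
    """image: (C, H, W); region_map: (H, W) with values in [0, 1]."""
    keep = (region_map < threshold).to(image.dtype)  # 1 = non-primary pixel
    return image * keep.unsqueeze(0)                 # broadcast over channels

img = torch.rand(3, 224, 224)
primary_map = torch.rand(224, 224)   # stand-in for a learned primary-region map
augmented = remove_primary_region(img, primary_map)
```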

    ELIP: Efficient Language-Image Pre-training with Fewer Vision Tokens

    Learning a versatile language-image model is computationally prohibitive under a limited computing budget. This paper delves into efficient language-image pre-training, an area that has received relatively little attention despite its importance in reducing computational cost and footprint. To that end, we propose ELIP, a vision token pruning and merging method that removes less influential tokens under the supervision of language outputs. Our method is computation-efficient, memory-efficient, and trainable-parameter-free, and it is distinguished from previous vision-only token pruning approaches by its alignment with the task objectives. We implement it in a progressive pruning manner across several sequential blocks. To evaluate its generalization, we apply ELIP to three commonly used language-image pre-training models and pre-train on public image-caption pairs with 4M images. Our experiments demonstrate that, with ~30% of vision tokens removed across 12 ViT layers, ELIP maintains performance comparable with the baselines (~0.32 accuracy drop on average) over various downstream tasks, including cross-modal retrieval, VQA, and image captioning. In addition, the GPU resources spared by ELIP allow us to scale up to larger batch sizes, thereby accelerating model pre-training and sometimes even enhancing downstream performance.
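
    A minimal sketch of score-based token pruning of the kind ELIP performs progressively across blocks; the scoring signal below is a stand-in for the paper's language-output supervision, and the helper name is hypothetical:

```python
# Hypothetical top-k token pruning: keep the highest-scoring vision tokens
# and drop the rest, shrinking the sequence each subsequent block processes.
import torch

def prune_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float):
    """tokens: (B, N, D); scores: (B, N). Returns the top-k tokens per sample."""
    k = max(1, int(tokens.size(1) * keep_ratio))
    idx = scores.topk(k, dim=1).indices                      # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))  # (B, k, D)
    return tokens.gather(1, idx)

x = torch.randn(2, 196, 768)            # ViT patch tokens
s = torch.randn(2, 196)                 # stand-in importance scores
x = prune_tokens(x, s, keep_ratio=0.7)  # ~30% of tokens removed
```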